Time-lapse electrical resistivity tomography (ERT) is a popular geophysical method for estimating three-dimensional (3D) permeability fields from electrical potential difference measurements. Traditional inversion and data assimilation methods are used to ingest this ERT data into hydrogeophysical models to estimate permeability. Due to ill-posedness and the curse of dimensionality, existing inversion strategies provide poor estimates and low resolution of the 3D permeability field. Recent advances in deep learning provide us with powerful algorithms to overcome this challenge. This paper presents a deep learning (DL) framework to estimate the 3D subsurface permeability from time-lapse ERT data. To test the feasibility of the proposed framework, we train DL-enabled inverse models on simulation data. Subsurface process models based on hydrogeophysics are used to generate this synthetic data for the deep learning analysis. Results show that the proposed weakly supervised learning can capture salient spatial features of the 3D permeability field. Quantitatively, the mean squared error (in terms of the natural log) on the labeled training, validation, and test datasets is less than 0.5, the R2 score (a global metric) is greater than 0.75, and the percent error in each cell (a local metric) is less than 10%. Finally, an added benefit in terms of computational cost is that the proposed DL-based inverse model is at least O(10^4) times faster than running a forward model. Note that traditional inversion may require multiple forward model simulations (e.g., on the order of 10 to 1000), which are very expensive. This computational saving of O(10^5)-O(10^7) makes the proposed DL-based inverse model attractive for subsurface imaging and real-time ERT monitoring applications, owing to its fast yet reasonably accurate estimates of the permeability field.
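For illustration, the following minimal sketch (not the authors' implementation; the measurement count, grid size, network layers, and synthetic data are all assumptions) shows a dense inverse model that maps flattened time-lapse ERT measurements to a 3D log-permeability grid, together with the global and local metrics quoted above (MSE on ln k, R2 score, per-cell percent error).

```python
# Hypothetical shapes and layers; the real framework is trained on
# hydrogeophysical simulation data rather than random tensors.
import numpy as np
import torch
import torch.nn as nn

N_MEAS = 2048          # assumed number of ERT potential-difference measurements
GRID = (20, 20, 10)    # assumed 3D permeability grid
N_CELLS = int(np.prod(GRID))

# DL-based inverse model: ERT data -> natural-log permeability field ln(k)
inverse_model = nn.Sequential(
    nn.Linear(N_MEAS, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, N_CELLS),
)

def evaluate(ln_k_true: np.ndarray, ln_k_pred: np.ndarray):
    """Metrics cited in the abstract: MSE < 0.5, R2 > 0.75, per-cell error < 10%."""
    mse = np.mean((ln_k_true - ln_k_pred) ** 2)                          # global, in ln(k)
    ss_res = np.sum((ln_k_true - ln_k_pred) ** 2)
    ss_tot = np.sum((ln_k_true - ln_k_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                                           # global metric
    pct_err = 100.0 * np.abs(ln_k_true - ln_k_pred) / np.abs(ln_k_true)  # local metric
    return mse, r2, pct_err

# Synthetic stand-ins for labeled data produced by the forward process model
ln_k_true = np.random.default_rng(0).normal(size=(4, N_CELLS))
ert = torch.randn(4, N_MEAS)
mse, r2, pct_err = evaluate(ln_k_true, inverse_model(ert).detach().numpy())
```

Once trained, estimating a permeability field costs only one such forward pass, which is where the reported speed-up over physics-based forward simulations comes from.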
A "heart attack", or myocardial infarction (MI), occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is Cardiovascular Magnetic Resonance Imaging (MRI) with intravenously administered gadolinium-based contrast (late gadolinium enhancement). However, no "gold standard" fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. This has the potential to reduce the uncertainty due to technical variability across labs and inherent problems of the data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MobileNet architectures. This is followed by Atrous Spatial Pyramid Pooling (ASPP) to produce spatial information at different scales and preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage, where spatial information is recovered via up-sampling to produce the final segmentation of the image into: i) background, ii) heart muscle, iii) blood and iv) scar areas. The new models were compared with state-of-the-art models and manual quantification. Our models showed favorable performance in global segmentation and scar tissue detection relative to state-of-the-art work, including a four-fold better performance in matching scar pixels to contours produced by clinicians.
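The sketch below (a minimal stand-in, not the MyI-Net code; the toy backbone, channel widths, and dilation rates are assumptions) illustrates the four-stage flow described above: feature extraction, ASPP for multi-scale context, concatenation with low-level features, and up-sampling to a four-class segmentation map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions, then a 1x1 projection."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))

class MyINetSketch(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Stage 1: low- and high-level feature extractors (stand-in for ResNet/MobileNet)
        self.low = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
        self.high = nn.Sequential(nn.Conv2d(32, 128, 3, stride=4, padding=1), nn.ReLU())
        # Stage 2: multi-scale context via ASPP
        self.aspp = ASPP(128, 64)
        # Stages 3-4: concatenate with low-level features, then recover resolution
        self.head = nn.Conv2d(64 + 32, n_classes, 3, padding=1)

    def forward(self, x):
        low = self.low(x)                         # low-level features at 1/2 resolution
        ctx = self.aspp(self.high(low))           # high-level multi-scale context at 1/8
        ctx = F.interpolate(ctx, size=low.shape[-2:], mode="bilinear", align_corners=False)
        out = self.head(torch.cat([ctx, low], dim=1))
        # Up-sample to input size: background / heart muscle / blood / scar logits
        return F.interpolate(out, size=x.shape[-2:], mode="bilinear", align_corners=False)

logits = MyINetSketch()(torch.randn(1, 1, 256, 256))   # -> (1, 4, 256, 256)
```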
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, the mechanisms of its main activity processes, magnetic storms, and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions will need to be developed to meet this Sparse Data challenge.
In this paper, a complete framework for autonomous self-driving is implemented. LIDAR, camera and IMU sensors are used together. The entire data communication is managed using the Robot Operating System (ROS), which provides a robust platform for the implementation of robotics projects. A Jetson Nano is used to provide powerful on-board processing capabilities. Sensor fusion is performed on the data received from the different sensors to improve the accuracy of the decision making and inferences that we derive from the data. This data is then used to create a localized map of the environment. In this step, the position of the vehicle is obtained with respect to the map built from the sensor data. The SLAM techniques used for this purpose are Hector Mapping and GMapping, which are widely used mapping techniques in ROS. Apart from SLAM, which primarily uses LIDAR data, Visual Odometry is implemented using a monocular camera. The fused sensor data is then used by Adaptive Monte Carlo Localization for car localization. Using the resulting localized map, path planning techniques such as the "TEB planner" and the "Dynamic Window Approach" are implemented for autonomous navigation of the vehicle. The last step in the project is the implementation of Control, the final decision-making block in the pipeline, which produces speed and steering commands for navigation compatible with Ackermann kinematics. The implementation of such a control block within a ROS framework using the three sensors, viz. LIDAR, camera and IMU, is the novel approach undertaken in this project.
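As a concrete illustration of the final control stage, the ROS 1 sketch below (not the project's code; topic names, the wheelbase, and the conversion formula are assumptions) republishes planner velocity commands, such as those produced by the TEB planner or the Dynamic Window Approach, as Ackermann-compatible speed and steering commands.

```python
# Minimal rospy node: Twist from the planner -> AckermannDriveStamped for the vehicle.
import math
import rospy
from geometry_msgs.msg import Twist
from ackermann_msgs.msg import AckermannDriveStamped

WHEELBASE = 0.3  # metres; hypothetical value for the test vehicle

def twist_to_ackermann(msg: Twist) -> AckermannDriveStamped:
    """Convert a planner Twist (linear/angular velocity) into an Ackermann command."""
    cmd = AckermannDriveStamped()
    cmd.header.stamp = rospy.Time.now()
    cmd.drive.speed = msg.linear.x
    # Bicycle-model conversion: steering angle from linear and angular velocity.
    if abs(msg.linear.x) > 1e-3:
        cmd.drive.steering_angle = math.atan(WHEELBASE * msg.angular.z / msg.linear.x)
    return cmd

def main():
    rospy.init_node("ackermann_control_sketch")
    pub = rospy.Publisher("/ackermann_cmd", AckermannDriveStamped, queue_size=1)
    rospy.Subscriber("/cmd_vel", Twist, lambda m: pub.publish(twist_to_ackermann(m)))
    rospy.spin()

if __name__ == "__main__":
    main()
```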
Verifying the input-output relationships of a neural network so as to achieve some desired performance specification is a difficult, yet important, problem due to the growing ubiquity of neural nets in many engineering applications. We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks. Specifically, we interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon that shapes distributions of initial states, and use characteristic functions to propagate the distribution of the input data through the network. Using the inverse Fourier transform, we obtain the corresponding cumulative distribution function of the output set, which can be used to check if the network is performing as expected given any random point from the input set. The proposed approach does not require distributions to have well-defined moments or moment generating functions. We demonstrate our proposed approach on two examples, and compare its performance to related approaches.
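The short numerical sketch below is not the paper's algorithm; it only illustrates the final inversion step on a toy example, assuming a small random ReLU network and an empirically estimated characteristic function: the output CDF is recovered from the characteristic function via Gil-Pelaez Fourier inversion and compared against the empirical CDF.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # toy one-hidden-layer ReLU net
w2, b2 = rng.normal(size=8), 0.1

def relu_net(x):
    # x: (n, 2) -> scalar output per sample
    return np.maximum(x @ W1.T + b1, 0.0) @ w2 + b2

# Propagate a Gaussian input distribution through the network by sampling
samples = relu_net(rng.normal(size=(10_000, 2)))

def char_fn(t):
    # Empirical characteristic function E[exp(i t Y)] on a grid of frequencies t
    return np.exp(1j * np.outer(t, samples)).mean(axis=1)

def cdf_gil_pelaez(x, t_max=40.0, n_t=400):
    # Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im[exp(-itx) phi(t)] / t dt
    t = np.linspace(1e-3, t_max, n_t)
    integrand = np.imag(np.exp(-1j * t * x) * char_fn(t)) / t
    return 0.5 - integrand.sum() * (t[1] - t[0]) / np.pi

x0 = np.median(samples)
print(cdf_gil_pelaez(x0), np.mean(samples <= x0))   # the two estimates should roughly agree
```

In the paper, the characteristic function is propagated through the network itself rather than estimated from output samples as done here; this sketch only demonstrates the inversion from a characteristic function to the cumulative distribution function used for verification.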
We provide a brief, and inevitably incomplete, overview of the use of Machine Learning (ML) and other AI methods in astronomy, astrophysics, and cosmology. Astronomy entered the big data era with the first digital sky surveys in the early 1990s and the resulting Terascale data sets, which required the automation of many data processing and analysis tasks, for example star-galaxy separation, with billions of feature vectors in hundreds of dimensions. The exponential data growth continued with the rise of synoptic sky surveys and Time Domain Astronomy, with the resulting Petascale data streams and the need for real-time processing, classification, and decision making. A broad variety of classification and clustering methods have been applied to these tasks, and this remains a very active area of research. Over the past decade we have seen an exponential growth of the astronomical literature involving a variety of ML/AI applications of ever increasing complexity and sophistication. ML and AI are now a standard part of the astronomical toolkit. As the data complexity continues to increase, we anticipate further advances leading towards collaborative human-AI discovery.
Recent model-based attacks on image classifiers have overwhelmingly focused on single-object (i.e., single dominant object) images. Departing from such settings, we address the more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant object) images, as they represent most real-world scenes. Our goal is to design an attack strategy that can learn from such natural scenes by leveraging the local patch differences inherent in these images (e.g., the difference between a local patch on the object 'person' and one on the object 'bike' in a traffic scene). Our key idea is that, to misclassify an adversarial multi-object image, every local patch in the image should confuse the victim classifier. Based on this, we propose a novel generative attack (called Local Patch Difference, or LPD-Attack) in which a novel contrastive loss function uses the aforementioned local differences in the feature space of multi-object scenes to optimize the perturbation generator. Through experiments with various victim convolutional neural networks, we show that our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
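A simplified stand-in for this idea (not the paper's exact contrastive objective; the patch size and random feature maps are assumptions) is sketched below: the victim's feature map is split into local patches and every adversarial patch is pushed away from its clean counterpart.

```python
import torch
import torch.nn.functional as F

def patch_difference_loss(feat_adv, feat_clean, patch=4):
    """feat_*: (B, C, H, W) victim feature maps for adversarial / clean images."""
    # Unfold into non-overlapping local patches: (B, C*patch*patch, n_patches)
    p_adv = F.unfold(feat_adv, kernel_size=patch, stride=patch)
    p_cln = F.unfold(feat_clean, kernel_size=patch, stride=patch)
    # Per-patch cosine similarity; minimizing it maximizes the local difference,
    # so that each local patch of the perturbed image confuses the classifier.
    return F.cosine_similarity(p_adv, p_cln, dim=1).mean()

# Toy usage with random "feature maps"; in practice these would come from a frozen
# victim CNN applied to clean and generator-perturbed multi-object images.
feat_clean = torch.randn(2, 64, 16, 16)
feat_adv = feat_clean + 0.1 * torch.randn_like(feat_clean)
print(float(patch_difference_loss(feat_adv, feat_clean)))
```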
Optimization-based meta-learning aims to learn an initialization such that a new, unseen task can be learned within a few gradient updates. Model-Agnostic Meta-Learning (MAML) is a benchmark algorithm comprising two optimization loops: the inner loop is dedicated to learning a new task, while the outer loop produces the meta-initialization. However, the ANIL (Almost No Inner Loop) algorithm shows that feature reuse is an alternative to rapid learning in MAML; that is, the meta-initialization phase primes MAML for feature reuse and removes the need for rapid learning. In contrast to ANIL, we hypothesize that new features may need to be learned during meta-testing: a new, unseen task drawn from a non-similar distribution requires rapid learning in addition to the reuse of existing features. In this paper, we invoke the width-depth duality of neural networks, increasing the width of the network by adding additional computational units (ACUs). The ACUs learn new atomic features on the meta-testing task, while the associated increase in width aids information propagation in the forward pass. The newly learned features are combined with the existing features in the last layer for meta-learning. Experimental results show that our proposed MAC method outperforms the existing ANIL algorithm on non-similar task distributions by approximately 13% (in the 5-shot task setting).
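The minimal sketch below (assumed layer sizes, not the authors' MAC implementation) shows the core idea: extra computational units widen the network, their outputs are concatenated with the reused backbone features before the final layer, and only the ACUs and the head take inner-loop gradient steps at meta-test time.

```python
import torch
import torch.nn as nn

class MACSketch(nn.Module):
    def __init__(self, in_dim=784, feat_dim=64, acu_dim=16, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())  # reused features
        self.acu = nn.Sequential(nn.Linear(in_dim, acu_dim), nn.ReLU())        # new atomic features
        self.head = nn.Linear(feat_dim + acu_dim, n_classes)

    def forward(self, x):
        # Width increase: concatenate reused and newly learned features in the last layer
        return self.head(torch.cat([self.backbone(x), self.acu(x)], dim=-1))

model = MACSketch()
# Meta-testing on a non-similar task: reuse the backbone, adapt only ACUs and head
# (the "rapid learning" side of the width-depth duality).
inner_params = list(model.acu.parameters()) + list(model.head.parameters())
opt = torch.optim.SGD(inner_params, lr=0.01)
logits = model(torch.randn(5, 784))                       # hypothetical 5-shot support batch
loss = nn.functional.cross_entropy(logits, torch.arange(5))
opt.zero_grad(); loss.backward(); opt.step()
```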
Most methods for crafting adversarial attacks focus on scenes with a single dominant object (e.g., images from ImageNet). Natural scenes, on the other hand, include multiple dominant objects that are semantically related. It is therefore crucial to explore attack strategies that look beyond learning on single-object scenes or attacking single-object victim classifiers. Owing to the inherent property of generative models to produce perturbations that transfer strongly to unknown models, this paper presents the first approach that uses generative models for adversarial attacks on multi-object scenes. To represent the relationships between the different objects in an input scene, we leverage the open-source, pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), motivated by exploiting the semantics encoded in the language space alongside the visual space. We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker's tool for training formidable perturbation generators for multi-object scenes. Using the joint image-text features to train the generator, we show that GAMA can craft potent transferable perturbations that fool victim classifiers in various attack settings. For example, GAMA triggers roughly 16% more misclassification than state-of-the-art generative approaches in black-box settings where both the classifier architecture and the attacker's data distribution differ from the victim's. Our code will be made publicly available soon.
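As a rough illustration (heavily simplified, not the GAMA training procedure; the generator, caption, perturbation budget, and loss are all assumptions), the sketch below shows how CLIP's joint image-text embeddings could drive a perturbation generator by pushing the perturbed image's embedding away from the text describing the clean scene.

```python
import torch
import clip  # OpenAI's CLIP package (pip install git+https://github.com/openai/CLIP.git)

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()
for p in clip_model.parameters():           # CLIP is a frozen tool, only the generator trains
    p.requires_grad_(False)

generator = torch.nn.Sequential(            # stand-in perturbation generator
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1), torch.nn.Tanh(),
).to(device)
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

image = torch.rand(1, 3, 224, 224, device=device)                 # stand-in for a real scene
text = clip.tokenize(["a person riding a bike on a street"]).to(device)

with torch.no_grad():
    text_feat = clip_model.encode_text(text).float()
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

eps = 10 / 255                                                    # assumed perturbation budget
adv = torch.clamp(image + eps * generator(image), 0, 1)
img_feat = clip_model.encode_image(adv).float()
img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

loss = (img_feat * text_feat).sum()   # minimize image-text agreement in CLIP's joint space
opt.zero_grad(); loss.backward(); opt.step()
```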
Developing an effective automatic classifier to separate genuine astrophysical sources from artifacts is essential for transient follow-up in wide-field optical surveys. Identifying transient detections among the subtraction artifacts produced by the image differencing process is a key step for such classifiers, known as the real-bogus classification problem. We apply a self-supervised machine learning model, the deep-embedded self-organizing map (DESOM), to this real-bogus classification problem. DESOM combines an autoencoder with a self-organizing map to perform clustering, distinguishing real from bogus detections based on their dimensionality-reduced representations. We use 32x32 normalized detection thumbnails as the input to DESOM. We explore different model training approaches and find that our best DESOM classifier achieves a missed detection rate of 6.6% at a false positive rate of 1.5%. DESOM offers a more nuanced way to fine-tune the decision boundary for identifying likely real detections when used in combination with other types of classifiers, for example those built on neural networks or decision trees. We also discuss other potential uses of DESOM and its limitations.
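The sketch below is a minimal stand-in (not the DESOM implementation; the autoencoder, latent size, SOM grid, and random thumbnails are assumptions) for the two ingredients named above: an autoencoder that compresses 32x32 thumbnails into a low-dimensional representation, and a self-organizing map fitted on those latent codes, whose nodes can then be labelled real or bogus.

```python
import numpy as np
import torch
import torch.nn as nn
from minisom import MiniSom   # pip install minisom

thumbs = torch.rand(1024, 1, 32, 32)          # stand-in for normalized detection thumbnails

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, 16))
decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 32 * 32))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(50):                           # brief reconstruction training
    z = encoder(thumbs)
    loss = nn.functional.mse_loss(decoder(z), thumbs.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

latent = encoder(thumbs).detach().numpy()     # dimensionality-reduced detections
som = MiniSom(8, 8, latent.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(latent, 5000)

# Each detection maps to a SOM node; nodes would be labelled real/bogus from a
# labelled subset, and the decision boundary can then be tuned node by node.
print("first detection falls on SOM node", som.winner(latent[0]))
```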